Virtual Human Signing as Expressive Animation
Abstract
We present an overview of research at UEA into the animation of sign language using a gesture notation, outlining applications that have been developed and key aspects of the implementation. We argue that the requirements for virtual human signing involve the development of expressive characters. Although the principal focus of the work has been on sign language, we believe that it generalises easily and has a strong contribution to make to future research on expressive characters.

1 Signing Research at UEA

One person in 1,000 becomes deaf before acquiring speech and may always have a low reading age for written English. Sign is their natural language. British Sign Language (BSL) has its own grammar and linguistic structure, which is not based on English. Sign language is expressive in its own right and is multimodal, combining manual gestures, other bodily movements, and facial expressions. Facial information is especially important, conveying key semantic information. Just as intonation can affect the meaning of a sentence (for instance, turning a statement into a question, or indicating irony), facial gestures modify manual gestures in crucial ways. In addition, certain signs use the same manual content combined with different mouthings, often related to speech, to distinguish closely related concepts.

Research at UEA addresses the linguistics of sign language, where little is documented about grammar and semantics, and explores the generation of signing using gesture notation. We have developed SiGML (Signing Gesture Markup Language) (Elliott et al., 2001) for representing sign language utterances. SiGML is used to generate realistic animation of signing using virtual human avatars. The Animgen system (Kennaway, 2001) employs advanced skeletal animation techniques to realise precise hand shapes and movements, leading to accurate bodily contacts. In addition, a range of facial gestures is animated by weighting morph targets that give appropriate displacements for facial mesh points (a minimal sketch of this weighting appears after the introduction to Section 2).

In collaboration with Televirtual Ltd, a local multimedia company, we have developed description formats for specifying avatars and their streams of animation parameters. Complete systems have been produced that allow control of animated content using a range of avatars embedded in a range of applications, including support on web pages. The framework integrates SiGML processing through Animgen and supports a number of avatars developed separately at UEA and by Televirtual.

Early work was based on signing captured via motion sensors, using blending techniques to concatenate motion sequences. Further work is based on capture via video, especially for facial expressions, providing a basis for recognising signs from motion data.

2 Virtual Signing Applications

Since deaf people do not necessarily find information easy to absorb as text, their access to services is restricted, despite the requirements of recent legislation. There is little support for digital services in sign. Recent projects by colleagues at UEA include Simon the Signer (Pezeshkpour et al., 1999), winner of two Royal Television Society Awards, and TESSA (Cox et al., 2002), winner of the top BCS IT Award, undertaken within the EU ViSiCAST project (ViSiCAST, 2000). Both Simon the Signer and TESSA (see Figure 1) used motion-captured signs that are blended into sequences on demand.
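The on-demand blending of captured signs can be illustrated with a minimal sketch. The function below is not the UEA implementation; it is a hypothetical linear crossfade that joins the tail of one captured clip to the head of the next, assuming each clip is an array of per-frame animation channels.

```python
import numpy as np

def crossfade_concat(clip_a: np.ndarray, clip_b: np.ndarray, overlap: int) -> np.ndarray:
    """Concatenate two motion clips (frames x channels), linearly
    crossfading the last `overlap` frames of clip_a into the first
    `overlap` frames of clip_b. Hypothetical sketch, not the UEA code."""
    # Blend weights ramp from 1 (all clip_a) down to 0 (all clip_b).
    w = np.linspace(1.0, 0.0, overlap)[:, None]
    blended = w * clip_a[-overlap:] + (1.0 - w) * clip_b[:overlap]
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]])

# Example: join two clips of 100 and 80 frames over 60 pose channels.
a = np.random.rand(100, 60)
b = np.random.rand(80, 60)
seq = crossfade_concat(a, b, overlap=10)
assert seq.shape == (170, 60)  # 90 + 10 blended + 70 frames
```

A production system would blend joint rotations with quaternion interpolation rather than channel-wise linear mixing, but the crossfade structure is the same.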
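Similarly, the facial channel described in Section 1 weights morph targets to displace facial mesh points. Assuming each target is stored as per-vertex displacements from a neutral mesh (the names and shapes here are illustrative, not Animgen's API), blending reduces to a weighted sum:

```python
import numpy as np

def blend_morph_targets(neutral: np.ndarray,
                        targets: dict,
                        weights: dict) -> np.ndarray:
    """Return deformed mesh vertices: neutral plus the weighted sum of
    per-target displacements. Illustrative sketch only."""
    deformed = neutral.copy()
    for name, weight in weights.items():
        # Each target stores displacements (vertices x 3) from neutral.
        deformed += weight * targets[name]
    return deformed

# Example: a 500-vertex face with two hypothetical facial-gesture targets.
neutral = np.zeros((500, 3))
targets = {"brow_raise": np.random.rand(500, 3) * 0.01,
           "mouth_open": np.random.rand(500, 3) * 0.01}
face = blend_morph_targets(neutral, targets,
                           {"brow_raise": 0.7, "mouth_open": 0.3})
```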
2.1 Simon the Signer

Simon the Signer took words from a television subtitle stream and rendered a sequence of signs in Sign Supported English (SSE) to appear as an optional commentary on screen. SSE is widely used in the education of deaf people, presenting a subset of BSL signs in English word order. Although technically successful, the use of SSE rather than true BSL was not fully accepted by the deaf community, since it does not provide the required cultural richness. There are obvious benefits for broadcasters if signing can be generated from an existing low-bandwidth data